Latent class analysis: selected models, without indigenous origin (pueblo originario) but with year and recategorized woman's age and gestational week at milestone 1; class characterization and fit measures (glca)
We load the data
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 652893 34.9 1332031 71.2 887330 47.4
Vcells 1171063 9.0 8388608 64.0 2075161 15.9
# load the glca models with and without the distal outcome
load("data2_lca2_adj4_alt_sin_po_2023_05_12.RData")
We load the packages
knitr::opts_chunk$set(echo = TRUE)
if(!require(poLCA)){install.packages("poLCA")}
if(!require(poLCAParallel)){devtools::install_github("QMUL/poLCAParallel@package")}
if(!require(compareGroups)){install.packages("compareGroups")}
if(!require(parallel)){install.packages("parallel")}
if(!require(Hmisc)){install.packages("Hmisc")}
if(!require(tidyverse)){install.packages("tidyverse")}
try(if(!require(sjPlot)){install.packages("sjPlot")})
if(!require(emmeans)){install.packages("emmeans")}
if(!require(nnet)){install.packages("nnet")}
if(!require(here)){install.packages("here")}
if(!require(doParallel)){install.packages("doParallel")}
if(!require(progress)){install.packages("progress")}
if(!require(caret)){install.packages("caret")}
if(!require(rpart)){install.packages("rpart")}
if(!require(rpart.plot)){install.packages("rpart.plot")}
if(!require(partykit)){install.packages("partykit")}
if(!require(randomForest)){install.packages("randomForest")}
if(!require(ggcorrplot)){install.packages("ggcorrplot")}
if(!require(polycor)){install.packages("polycor")}
if(!require(tableone)){install.packages("tableone")}
if(!require(broom)){install.packages("broom")}
if(!require(plotly)){install.packages("plotly")}
if(!require(rsvg)){install.packages("rsvg")}
if(!require(DiagrammeRsvg)){install.packages("DiagrammeRsvg")}
if(!require(effects)){install.packages("effects")}
if(!require(glca)){install.packages("glca")}
Call:
glca(formula = f_preds, data = mydata_preds3, nclass = 7, n.init = 500,
decreasing = T, testiter = 500, maxiter = 10000, seed = seed,
verbose = FALSE)
Manifest items : AÑO CAUSAL EDAD_MUJER_REC PAIS_ORIGEN_REC HITO1_EDAD_GEST_SEM_REC MACROZONA PREV_TRAMO_REC
Categories for manifest items :
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
AÑO 2 3 4 5 6
CAUSAL 2 3 4
EDAD_MUJER_REC 1 2 3 4 5 6
PAIS_ORIGEN_REC 1 2 3
HITO1_EDAD_GEST_SEM_REC 1 2 3 4 5 6
MACROZONA 1 2 3 4 5 6
PREV_TRAMO_REC 1 2 3 4 5
Model : Latent class analysis
Number of latent classes : 7
Number of observations : 3789
Number of parameters : 195
log-likelihood : -30555.55
G-squared : 5518.95
AIC : 61501.1
BIC : 62717.87
Marginal prevalences for latent classes :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
0.06645 0.15167 0.11304 0.33570 0.09050 0.04473 0.19791
Class prevalences by group :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
ALL 0.06645 0.15167 0.11304 0.3357 0.0905 0.04473 0.19791
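The fit measures reported above follow directly from the log-likelihood, the number of parameters, and the sample size. A minimal check, not part of the original script, with the values copied from the summary:

```r
# Reconstruct AIC/BIC from the printed glca summary (illustrative check only)
loglik <- -30555.55   # log-likelihood reported above
npar   <- 195         # number of parameters
n      <- 3789        # number of observations

aic <- -2 * loglik + 2 * npar        # matches the reported 61501.1
bic <- -2 * loglik + npar * log(n)   # matches the reported 62717.87 (up to rounding)
```

Both values agree with the summary, which confirms that BIC's heavier penalty (log(3789) ≈ 8.24 per parameter, versus 2 for AIC) drives the gap between the two criteria.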
Item-response probabilities :
AÑO
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 0.9221 0.0000 0.0000 0.0779 0.0000
Class 2 0.1784 0.1893 0.2014 0.1922 0.2387
Class 3 0.1676 0.2244 0.1903 0.2288 0.1888
Class 4 0.1502 0.2356 0.1962 0.2328 0.1852
Class 5 0.1370 0.2212 0.1974 0.2691 0.1753
Class 6 0.1332 0.1707 0.2579 0.1188 0.3194
Class 7 0.0865 0.2783 0.1385 0.2446 0.2521
CAUSAL
Y = 1 Y = 2 Y = 3
Class 1 0.6980 0.3020 0.0000
Class 2 0.0004 0.0000 0.9996
Class 3 0.0663 0.9337 0.0000
Class 4 0.0978 0.9022 0.0000
Class 5 0.3201 0.6799 0.0000
Class 6 0.0763 0.0000 0.9237
Class 7 0.9596 0.0404 0.0000
EDAD_MUJER_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0675 0.0341 0.2857 0.3011 0.2654 0.0461
Class 2 0.0017 0.3683 0.2948 0.2090 0.1070 0.0193
Class 3 0.0000 0.0000 0.0810 0.2927 0.5530 0.0733
Class 4 0.0000 0.0411 0.2812 0.3292 0.2821 0.0663
Class 5 0.0000 0.0166 0.2900 0.3111 0.3201 0.0621
Class 6 0.0000 0.1763 0.3634 0.2474 0.1837 0.0291
Class 7 0.0000 0.0132 0.2456 0.4139 0.2895 0.0378
PAIS_ORIGEN_REC
Y = 1 Y = 2 Y = 3
Class 1 0.0715 0.8424 0.0861
Class 2 0.0000 0.9922 0.0078
Class 3 0.0000 0.9269 0.0731
Class 4 0.0000 0.9876 0.0124
Class 5 0.0000 0.0000 1.0000
Class 6 0.0000 0.0000 1.0000
Class 7 0.0000 0.8740 0.1260
HITO1_EDAD_GEST_SEM_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0402 0.0195 0.1281 0.2324 0.2853 0.2944
Class 2 0.0087 0.7791 0.2122 0.0000 0.0000 0.0000
Class 3 0.0073 0.0026 0.6217 0.2524 0.0978 0.0182
Class 4 0.0059 0.0000 0.3329 0.3597 0.2238 0.0776
Class 5 0.0057 0.0056 0.3261 0.3654 0.2321 0.0651
Class 6 0.0220 0.7886 0.1893 0.0000 0.0000 0.0000
Class 7 0.0740 0.1929 0.2337 0.4994 0.0000 0.0000
MACROZONA
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0317 0.1823 0.3760 0.1489 0.0885 0.1726
Class 2 0.0035 0.3853 0.1756 0.1451 0.0863 0.2042
Class 3 0.0000 0.7081 0.0988 0.0583 0.0666 0.0682
Class 4 0.0000 0.3346 0.1712 0.2124 0.0873 0.1945
Class 5 0.0000 0.6184 0.0732 0.0354 0.2410 0.0319
Class 6 0.0059 0.5569 0.0811 0.0161 0.2994 0.0406
Class 7 0.0000 0.4072 0.1540 0.1656 0.1150 0.1583
PREV_TRAMO_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 0.0202 0.0390 0.6238 0.2717 0.0453
Class 2 0.0018 0.0701 0.7233 0.1996 0.0053
Class 3 0.0000 0.8254 0.0000 0.1655 0.0091
Class 4 0.0008 0.0107 0.6362 0.3523 0.0000
Class 5 0.0056 0.0000 0.6072 0.2916 0.0956
Class 6 0.0294 0.0044 0.5228 0.1608 0.2826
Class 7 0.0000 0.0933 0.5564 0.3503 0.0000
# https://rdrr.io/cran/glca/src/R/summary.glca.R
# Class prevalence by group:
# mean(best_model_glca_w_distal_outcome$posterior$ALL$`Class 1`)
# Item-response probabilities (most likely response):
# print(sapply(best_model_glca_w_distal_outcome$param$rho$ALL,
#              function(m) apply(m, 1, which.max)))
Figures 1–8: Selected Model
rho_glca<-
do.call("bind_rows",best_model_glca$param$rho$ALL) %>%
t() %>%
round(2) %>%
data.table::data.table(keep.rownames = T) %>%
magrittr::set_colnames(c("variables", paste0("Class",1:7))) %>%
tidyr::separate(variables, into=c("var", "prob"), sep=".Y =")
lcmodel_glca <- reshape2::melt(rho_glca, level=2) %>% dplyr::rename("class"="variable")
traductor_cats <- readxl::read_excel("tabla12.xlsx") %>%
dplyr::mutate(level=readr::parse_double(level)) %>%
dplyr::mutate(level= dplyr::case_when(grepl("CAUSAL",charactersitic)~ level-1,T~level)) %>%
dplyr::mutate(charactersitic=gsub(" \\(%\\)", "", charactersitic))
lcmodel_glca<- lcmodel_glca %>%
dplyr::mutate(pr=as.numeric(gsub("[^0-9.]+", "", prob))) %>%
dplyr::left_join(traductor_cats[,c("charactersitic", "level", "CATEGORIA")], by= c("var"="charactersitic", "pr"="level")) %>%
dplyr::mutate(CATEGORIA= dplyr::case_when(var=="AÑO" & prob==" 1"~"Perdidos", T~CATEGORIA))
lcmodel_glca$text_label<-paste0("Categoria:",lcmodel_glca$CATEGORIA,"<br>%: ",scales::percent(lcmodel_glca$value))
zp3 <- ggplot(lcmodel_glca,aes(x = var, y = value, fill = factor(pr), label=text_label))
zp3 <- zp3 + geom_bar(stat = "identity", position = "stack")
zp3 <- zp3 + facet_grid(class ~ .)
zp3 <- zp3 + scale_fill_brewer(type="seq", palette="Greys", na.value = "white") +theme_bw()
zp3 <- zp3 + labs(y = "Porcentaje de probabilidad de respuesta",
x = "",
fill ="Categorías de\nRespuesta")
zp3 <- zp3 + theme( axis.text.y=element_blank(),
axis.ticks.y=element_blank(),
panel.grid.major.y=element_blank())
zp3 <- zp3 + guides(fill = guide_legend(reverse=TRUE))
zp3 <- zp3 + theme(axis.text.x = element_text(angle = 30, hjust = 1))
#print(zp1)
ggplotly(zp3, tooltip = c("text_label"))%>% layout(xaxis= list(showticklabels = T),height=600, width=800)
Figure 9: Selected Model
Call:
glca(formula = f_adj, data = mydata_preds3, nclass = 7, n.init = 500,
decreasing = T, testiter = 500, maxiter = 10000, seed = seed,
verbose = FALSE)
Manifest items : AÑO CAUSAL EDAD_MUJER_REC PAIS_ORIGEN_REC HITO1_EDAD_GEST_SEM_REC MACROZONA PREV_TRAMO_REC
Covariates (Level 1) : outcome
Categories for manifest items :
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
AÑO 2 3 4 5 6
CAUSAL 2 3 4
EDAD_MUJER_REC 1 2 3 4 5 6
PAIS_ORIGEN_REC 1 2 3
HITO1_EDAD_GEST_SEM_REC 1 2 3 4 5 6
MACROZONA 1 2 3 4 5 6
PREV_TRAMO_REC 1 2 3 4 5
Model : Latent class analysis
Number of latent classes : 7
Number of observations : 3789
Number of parameters : 201
log-likelihood : -30486.95
G-squared : 7490.144
AIC : 61375.9
BIC : 62630.11
Marginal prevalences for latent classes :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
0.06215 0.15180 0.15427 0.30810 0.08183 0.04493 0.19692
Class prevalences by group :
Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
ALL 0.06215 0.1518 0.15427 0.3081 0.08183 0.04493 0.19692
Logistic regression coefficients :
Class 1/7 Class 2/7 Class 3/7 Class 4/7 Class 5/7
(Intercept) -1.5891 -1.0082 -1.774 0.9162 -1.0703
outcome1 0.5077 0.8534 1.683 -0.6036 0.2284
Class 6/7
(Intercept) -2.5716
outcome1 1.2254
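The logistic regression coefficients above are log-odds of membership in each class versus the reference (Class 7); exponentiating them gives odds ratios for the distal outcome. A short illustration, with the coefficients copied from the summary above:

```r
# Convert the Class k vs Class 7 log-odds for outcome1 into odds ratios
coefs <- c(`Class 1/7` = 0.5077, `Class 2/7` = 0.8534, `Class 3/7` = 1.6830,
           `Class 4/7` = -0.6036, `Class 5/7` = 0.2284, `Class 6/7` = 1.2254)
odds_ratios <- exp(coefs)
round(odds_ratios, 2)
# e.g. exp(1.683) ~ 5.4: outcome = 1 multiplies the odds of
# Class 3 (vs Class 7) membership by roughly 5.4
```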
Item-response probabilities :
AÑO
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 0.9641 0.0000 0.0000 0.0359 0.0000
Class 2 0.1783 0.1893 0.2013 0.1922 0.2389
Class 3 0.1631 0.2133 0.1836 0.2306 0.2094
Class 4 0.1515 0.2403 0.1975 0.2383 0.1724
Class 5 0.1370 0.2201 0.2041 0.2709 0.1679
Class 6 0.1332 0.1706 0.2587 0.1183 0.3192
Class 7 0.0872 0.2769 0.1354 0.2465 0.2540
CAUSAL
Y = 1 Y = 2 Y = 3
Class 1 0.7020 0.2980 0.0000
Class 2 0.0007 0.0000 0.9993
Class 3 0.0620 0.9380 0.0000
Class 4 0.1159 0.8841 0.0000
Class 5 0.3478 0.6522 0.0000
Class 6 0.0821 0.0000 0.9179
Class 7 0.9542 0.0458 0.0000
EDAD_MUJER_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0722 0.0356 0.2905 0.3065 0.2557 0.0394
Class 2 0.0017 0.3683 0.2947 0.2090 0.1070 0.0192
Class 3 0.0000 0.0000 0.0934 0.3127 0.5220 0.0719
Class 4 0.0000 0.0457 0.2968 0.3225 0.2667 0.0684
Class 5 0.0000 0.0170 0.3056 0.3079 0.3124 0.0571
Class 6 0.0000 0.1752 0.3621 0.2475 0.1861 0.0290
Class 7 0.0000 0.0125 0.2461 0.4151 0.2885 0.0377
PAIS_ORIGEN_REC
Y = 1 Y = 2 Y = 3
Class 1 0.0764 0.8295 0.0941
Class 2 0.0000 0.9918 0.0082
Class 3 0.0000 0.9208 0.0792
Class 4 0.0000 0.9707 0.0293
Class 5 0.0000 0.0000 1.0000
Class 6 0.0000 0.0000 1.0000
Class 7 0.0000 0.8762 0.1238
HITO1_EDAD_GEST_SEM_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0463 0.0169 0.1363 0.2382 0.2734 0.2888
Class 2 0.0087 0.7791 0.2122 0.0000 0.0000 0.0000
Class 3 0.0161 0.0026 0.6401 0.2457 0.0839 0.0116
Class 4 0.0001 0.0000 0.2816 0.3728 0.2514 0.0941
Class 5 0.0030 0.0054 0.3133 0.3895 0.2294 0.0594
Class 6 0.0234 0.7866 0.1900 0.0000 0.0000 0.0000
Class 7 0.0758 0.1941 0.2365 0.4936 0.0000 0.0000
MACROZONA
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5 Y = 6
Class 1 0.0339 0.1885 0.3686 0.1474 0.0926 0.1689
Class 2 0.0035 0.3852 0.1756 0.1450 0.0863 0.2043
Class 3 0.0000 0.6825 0.1090 0.0540 0.0771 0.0774
Class 4 0.0000 0.2990 0.1769 0.2343 0.0847 0.2051
Class 5 0.0000 0.6330 0.0722 0.0149 0.2595 0.0204
Class 6 0.0059 0.5595 0.0807 0.0164 0.2970 0.0405
Class 7 0.0000 0.4054 0.1552 0.1667 0.1134 0.1593
PREV_TRAMO_REC
Y = 1 Y = 2 Y = 3 Y = 4 Y = 5
Class 1 0.0199 0.0420 0.6233 0.2658 0.0490
Class 2 0.0018 0.0701 0.7233 0.1995 0.0053
Class 3 0.0000 0.5797 0.1642 0.2491 0.0069
Class 4 0.0012 0.0255 0.6366 0.3367 0.0000
Class 5 0.0063 0.0000 0.6108 0.2796 0.1033
Class 6 0.0293 0.0042 0.5217 0.1609 0.2839
Class 7 0.0000 0.0917 0.5573 0.3510 0.0000
rho_glca_adj<-
do.call("bind_rows",best_model_glca_w_distal_outcome$param$rho$ALL) %>%
t() %>%
round(2) %>%
data.table::data.table(keep.rownames = T) %>%
magrittr::set_colnames(c("variables", paste0("Class",1:7))) %>%
tidyr::separate(variables, into=c("var", "prob"), sep=".Y =")
lcmodel_glca_adj <- reshape2::melt(rho_glca_adj, level=2) %>% dplyr::rename("class"="variable")
lcmodel_glca_adj<- lcmodel_glca_adj %>%
dplyr::mutate(pr=as.numeric(gsub("[^0-9.]+", "", prob))) %>%
dplyr::left_join(traductor_cats[,c("charactersitic", "level", "CATEGORIA")], by= c("var"="charactersitic", "pr"="level")) %>%
dplyr::mutate(CATEGORIA= dplyr::case_when(var=="AÑO" & prob==" 1"~"Perdidos", T~CATEGORIA))
lcmodel_glca_adj$text_label<-paste0("Categoria:",lcmodel_glca_adj$CATEGORIA,"<br>%: ",scales::percent(lcmodel_glca_adj$value))
zp4 <- ggplot(lcmodel_glca_adj,aes(x = var, y = value, fill = factor(pr), label=text_label))
zp4 <- zp4 + geom_bar(stat = "identity", position = "stack")
zp4 <- zp4 + facet_grid(class ~ .)
zp4 <- zp4 + scale_fill_brewer(type="seq", palette="Greys", na.value = "white") +theme_bw()
zp4 <- zp4 + labs(y = "Porcentaje de probabilidad de respuesta",
x = "",
fill ="Categorías de\nRespuesta")
zp4 <- zp4 + theme( axis.text.y=element_blank(),
axis.ticks.y=element_blank(),
panel.grid.major.y=element_blank())
zp4 <- zp4 + guides(fill = guide_legend(reverse=TRUE))
zp4 <- zp4 + theme(axis.text.x = element_text(angle = 30, hjust = 1))
#print(zp1)
ggplotly(zp4, tooltip = c("text_label"))%>% layout(xaxis= list(showticklabels = T),height=600, width=800)
Figures 10–18: Selected Model
glca_gam_dist_outcome <- best_model_glca_w_distal_outcome$param$gamma[[1]]
colnames(glca_gam_dist_outcome) <- paste0("Class", 1:7)
#conditional probabilities
#Pr(B1=1|Class 3)
posteriors_glca_adj <- data.frame(best_model_glca_w_distal_outcome$posterior,
                                  # modal class assignment from the posterior probabilities
                                  predclass = apply(best_model_glca_w_distal_outcome$posterior$ALL, 1, which.max))
classification_table_adj <- plyr::ddply(posteriors_glca_adj, "predclass", function(x) colSums(x[,1:length(LCA_best_model_adj_mod$P)]))
clasification_errors_adj<- 1 - sum(diag(as.matrix(classification_table_adj[,2:(length(LCA_best_model_adj_mod$P)+1)]))) / sum(classification_table_adj[,2:(length(LCA_best_model_adj_mod$P)+1)])
warning(paste("Error de clasificación: ", round(clasification_errors_adj,2)))
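The classification-error logic above sums posterior probabilities within each modally assigned class and measures the off-diagonal mass. A self-contained toy version (the matrix here is illustrative, not the model's posteriors):

```r
# Toy posterior matrix: 4 observations, 2 classes (illustrative data only)
post <- matrix(c(0.9, 0.1,
                 0.8, 0.2,
                 0.3, 0.7,
                 0.2, 0.8), ncol = 2, byrow = TRUE)
predclass <- apply(post, 1, which.max)      # modal class assignment
ct <- rowsum(post, predclass)               # posterior mass per assigned class
class_error <- 1 - sum(diag(ct)) / sum(ct)  # off-diagonal mass = 0.2 here
```

With perfectly crisp posteriors (all rows 0/1) the table is diagonal and the error is 0; the softer the posteriors, the closer the error gets to chance.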
entropy_alt <- function(p) sum(-p * log(p))
error_prior_adj <- entropy_alt(LCA_best_model_adj_mod$P) # Class proportions
error_post_adj <- mean(apply(LCA_best_model_adj_mod$posterior, 1, entropy_alt),na.rm=T)
R2_entropy_alt_adj <- (error_prior_adj - error_post_adj) / error_prior_adj
warning(paste("Entropía: ", round(R2_entropy_alt_adj,2)))
#https://stackoverflow.com/questions/72783185/entropy-calculation-gives-nan-is-applying-na-omit-a-valid-tweak
entropy.safe <- function (p) {
if (any(p > 1 | p < 0)) stop("probability must be between 0 and 1")
log.p <- numeric(length(p))
safe <- p != 0
log.p[safe] <- log(p[safe])
sum(-p * log.p)
}
error_prior2_adj <- entropy.safe(LCA_best_model_adj_mod$P) # Class proportions
error_post2_adj <- mean(apply(LCA_best_model_adj_mod$posterior, 1, entropy.safe),na.rm=T)
R2_entropy_safe_adj <- (error_prior2_adj - error_post2_adj) / error_prior2_adj
warning(paste("Entropía segura: ", round(R2_entropy_safe_adj,2)))
#https://gist.github.com/daob/c2b6d83815ddd57cde3cebfdc2c267b3
warning(paste("Entropía (solución Oberski): ", round(entropy.R2(LCA_best_model_adj_mod),2)))
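The relative-entropy R² computed above compares the entropy of the prior class proportions with the mean entropy of the individual posteriors. A toy check, with the safe entropy function redefined here so the block stands alone:

```r
# Self-contained illustration of entropy-based R2 (toy values, not the model's)
entropy_safe <- function(p) {            # treats 0*log(0) as 0
  lp <- ifelse(p > 0, log(p), 0)
  sum(-p * lp)
}
P    <- c(0.5, 0.5)                      # toy prior class proportions
post <- rbind(c(1.0, 0.0),               # one perfectly classified observation
              c(0.5, 0.5))               # one maximally uncertain observation
r2 <- (entropy_safe(P) - mean(apply(post, 1, entropy_safe))) / entropy_safe(P)
# r2 = 1 when every assignment is certain; 0 when posteriors equal the prior
```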
# Minimum average posterior probability of cluster membership (>0.7), interpretability (classes are clearly distinguishable), and parsimony (each class has a sufficient sample size for further analysis; n≥50).
Class 1 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.4947 -0.7037 0.4399 -1.600 0.109738
outcome1 4.2829 1.4546 0.4211 3.454 0.000558 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 2 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 2.9162 1.0703 0.2358 4.538 5.85e-06 ***
outcome1 0.7958 -0.2284 0.2374 -0.962 0.336
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 3 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 1.06401 0.06205 0.25326 0.245 0.8065
outcome1 1.86829 0.62502 0.26042 2.400 0.0164 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 4 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 7.2901 1.9865 0.2150 9.238 < 2e-16 ***
outcome1 0.4352 -0.8320 0.2203 -3.777 0.000161 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Class 5 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.5952 -0.5188 0.3877 -1.338 0.181
outcome1 1.3222 0.2793 0.3866 0.722 0.470
Class 6 / 7 :
Odds Ratio Coefficient Std. Error t value Pr(>|t|)
(Intercept) 0.2228 -1.5013 0.4144 -3.623 0.000296 ***
outcome1 2.7101 0.9970 0.4239 2.352 0.018721 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
save.image("data2_lca3_glca_sin_po.RData")
require(tidyverse)
sesion_info <- devtools::session_info()
dplyr::select(
tibble::as_tibble(sesion_info$packages),
c(package, loadedversion, source)
) %>%
DT::datatable(filter = 'top', colnames = c('Row number' = 1, 'Package' = 2, 'Version' = 3, 'Source' = 4),
caption = htmltools::tags$caption(
style = 'caption-side: top; text-align: left;',
'', htmltools::em('Packages')),
options=list(
initComplete = htmlwidgets::JS(
"function(settings, json) {",
"$(this.api().tables().body()).css({
'font-family': 'Helvetica Neue',
'font-size': '50%',
'code-inline-font-size': '15%',
'white-space': 'nowrap',
'line-height': '0.75em',
'min-height': '0.5em'
});",#;
"}")))